In mathematics, the entropy power inequality is a result in information theory concerning the so-called "entropy power" of random variables. It states that the entropy power of suitably well-behaved independent random variables is a superadditive function. The entropy power inequality was proved in 1948 by Claude Shannon in his seminal paper "A Mathematical Theory of Communication". Shannon also provided a sufficient condition for equality to hold; Stam (1959) showed that the condition is in fact necessary.

==Statement of the inequality==
For a random variable ''X'' : Ω → '''R'''<sup>''n''</sup> with probability density function ''f'' : '''R'''<sup>''n''</sup> → '''R''', the differential entropy of ''X'', denoted ''h''(''X''), is defined to be

: <math>h(X) = - \int_{\mathbb{R}^n} f(x) \log f(x) \, \mathrm{d} x</math>

and the entropy power of ''X'', denoted ''N''(''X''), is defined to be

: <math>N(X) = \frac{1}{2 \pi e} e^{\frac{2}{n} h(X)}.</math>

In particular, ''N''(''X'') = |''K''|<sup>1/''n''</sup> when ''X'' is normally distributed with covariance matrix ''K''.

Let ''X'' and ''Y'' be independent random variables with probability density functions in the ''L''<sup>''p''</sup> space ''L''<sup>''p''</sup>('''R'''<sup>''n''</sup>) for some ''p'' > 1. Then

: <math>N(X + Y) \geq N(X) + N(Y).</math>

Moreover, equality holds if and only if ''X'' and ''Y'' are multivariate normal random variables with proportional covariance matrices.
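As an illustrative check of the equality case, suppose ''X'' and ''Y'' are independent zero-mean Gaussian vectors in '''R'''<sup>''n''</sup> with covariance matrices ''K'' and ''cK'' for some constant ''c'' > 0, so that their covariances are proportional. Then ''X'' + ''Y'' is Gaussian with covariance matrix (1 + ''c'')''K'', and using ''N''(''X'') = |''K''|<sup>1/''n''</sup> for Gaussian vectors,

: <math>N(X + Y) = \left| (1 + c) K \right|^{1/n} = (1 + c) \, |K|^{1/n} = |K|^{1/n} + |cK|^{1/n} = N(X) + N(Y),</math>

so the inequality holds with equality, consistent with the proportional-covariance condition above.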